ClipText
A text that visualizes itself.
Text is a very convenient tool for describing ideas. However, video is much more attractive and convincing to ordinary people. For example, one reason KickStarter and the like are successful is that the ideas come with nice video presentations.
ClipText is an AI system that understands the meaning of the text you enter and comes up with a visualization of it: it creates an animated video in which all the components mentioned in the text are rendered as 4D objects.
As you type, ClipText brings the video content to life with every word and sentence you write. ^__^
This idea is for the time when we have strong AI, but it is possible to start building such a system today. For example, we already have libraries of thematic video clips, and we already have topic inference algorithms; a rough sketch of that part follows. Animation generation would be a much more involved problem, because the system would need to come up with objects matching the ones mentioned in the sentences, and with storylines representing the relationships between them.
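A minimal sketch of the part that seems feasible today, assuming a small hand-tagged clip library and simple keyword-overlap scoring in place of real topic inference (all names and data here are illustrative, not an existing ClipText API):

```python
# Sketch: match sentences of the input text to tagged clips from a library.
# CLIP_LIBRARY and the keyword-overlap scoring are illustrative assumptions.

import re
from collections import Counter

# Hypothetical thematic clip library: clip file -> descriptive tags.
CLIP_LIBRARY = {
    "city_timelapse.mp4":  {"city", "people", "street", "busy"},
    "rocket_launch.mp4":   {"rocket", "launch", "space", "idea"},
    "whiteboard_talk.mp4": {"idea", "explain", "describe", "text"},
}

def topics(sentence: str) -> Counter:
    """Very rough topic inference: lowercase word frequencies."""
    return Counter(re.findall(r"[a-z]+", sentence.lower()))

def best_clip(sentence: str) -> str:
    """Pick the library clip whose tags overlap most with the sentence."""
    words = topics(sentence)
    return max(CLIP_LIBRARY, key=lambda clip: sum(words[t] for t in CLIP_LIBRARY[clip]))

def storyboard(text: str) -> list[tuple[str, str]]:
    """Map each sentence to a clip, yielding a rough storyboard."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return [(s, best_clip(s)) for s in sentences]

if __name__ == "__main__":
    for sentence, clip in storyboard("Text is a convenient tool to describe ideas. "
                                     "Video is more convincing for people."):
        print(f"{clip:>22}  <-  {sentence}")
```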
An application: combining it with a voice-to-text recognizer, to visualize spoken thoughts; a sketch follows.
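A sketch of that application, assuming the storyboard() helper from the previous sketch and the third-party SpeechRecognition package for the voice-to-text step; both are illustrative stand-ins rather than parts of ClipText:

```python
# Sketch: spoken thought -> text -> storyboard of clips.
# Uses the third-party SpeechRecognition package as an example recognizer;
# storyboard() is the illustrative helper defined in the previous sketch.

import speech_recognition as sr

def visualize_speech() -> list[tuple[str, str]]:
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:            # record one spoken thought
        audio = recognizer.listen(source)
    text = recognizer.recognize_google(audio)  # voice-to-text step
    return storyboard(text)                    # reuse the text-to-clip sketch
```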
There is already a system with a sort of auto-generator, but we have to narrate the story ourselves: there is no AI yet that would take your thought and generate a story plot to visualize it. You have to create the story plot yourself.
Related. Similar strategies may help get closer to what ClipText describes.
We could approach that, but do we have something that turns a half-baked idea description without a story plot into one? For now it is other people, not computers, who do that, by putting some thought into an abstract idea and creating fun story-lines from it.
We could use autosummarizers to decide what headlines to show in the generated video; a minimal sketch follows.
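For example, a tiny frequency-based extractive summarizer could choose which sentences to render as headlines; the scoring below is a simple stand-in for a real autosummarizer:

```python
# Sketch: pick headline sentences for the generated video with a very simple
# extractive summarizer (word-frequency scoring stands in for a real one).

import re
from collections import Counter

def headlines(text: str, count: int = 2) -> list[str]:
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    freqs = Counter(re.findall(r"[a-z]+", text.lower()))

    def score(sentence: str) -> float:
        words = re.findall(r"[a-z]+", sentence.lower())
        return sum(freqs[w] for w in words) / max(len(words), 1)

    # Keep the highest-scoring sentences, in their original order.
    top = sorted(sentences, key=score, reverse=True)[:count]
    return [s for s in sentences if s in top]

if __name__ == "__main__":
    print(headlines("Text describes ideas. Video is more convincing. "
                    "ClipText turns text into video."))
```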